21 research outputs found
The Effect of Interface Elements on Transcription Tasks to Reduce Number-Entry Errors
Many tasks in daily life require transcribing information accurately from one medium to another. However, humans make errors frequently. While most errors in daily life are little more than an inconvenience, in safety-critical domains such as healthcare a small error, like typing the wrong number when programming a medical device, can have grave consequences. Despite the potentially fatal consequences of errors, little is known about the errors people make when using medical devices, such as infusion pumps, and about how the devices themselves influence the errors made. This thesis reports ten studies examining different interface design features in the context of medical devices and their potential to reduce errors.
The first three studies empirically evaluated particular design features proposed by previous work. These studies did not produce the predicted error reduction, although a consistently low rate of errors recurred across all of them; the claims made in earlier work therefore could not be supported. The studies did show, however, that the impact of particular features is hard to observe, both because of the nature of the research area and because of the robustness of the evaluated interface. They also yielded interesting insights into how people use such interfaces to enter numbers, which need to be taken into account.
Inspired by results from cognitive psychology suggesting that presenting information in a degraded format increases the likelihood of memorising it accurately, a further set of seven experiments is presented in this work evaluating the effect of such an approach on number transcription tasks. Results showed that people made significantly fewer errors when transcribing less visible numbers, and less visible text as well. The studies also confirmed that it is the source display showing the number, and not the entry display, that is responsible for this counter-intuitive way of reducing errors, in line with previous work in human-computer interaction and psychology. The robustness of the discovered effect was further investigated under varying levels of audio distraction; even in a distracting environment the effect led to a significant decrease in errors.
The potential impact of this work could be a valuable contribution to domains where accuracy is of great importance.
Utilizing Deep Reinforcement Learning to Effect Autonomous Orbit Transfers and Intercepts via Electromagnetic Propulsion
Problem: The growth in space-capable entities has caused a rapid rise in the number of derelict satellites and space debris in orbit around Earth, which pose a significant navigation hazard.
Objectives: Develop a system capable of autonomously neutralizing multiple pieces of space debris in various orbits.
Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization
We consider the problem of automatically generating a narrative biomedical
evidence summary from multiple trial reports. We evaluate modern neural models
for abstractive summarization of relevant article abstracts from systematic
reviews previously conducted by members of the Cochrane collaboration, using
the authors' conclusions section of the review abstract as our target. We enlist
medical professionals to evaluate generated summaries, and we find that modern
summarization systems yield consistently fluent and relevant synopses, but that
they are not always factual. We propose new approaches that capitalize on
domain-specific models to inform summarization, e.g., by explicitly demarcating
snippets of inputs that convey key findings, and emphasizing the reports of
large and high-quality trials. We find that these strategies modestly improve
the factual accuracy of generated summaries. Finally, we propose a new method
for automatically evaluating the factuality of generated narrative evidence
syntheses using models that infer the directionality of reported findings.
Comment: 11 pages, 2 figures. Accepted for presentation at the 2021 AMIA Informatics Summit.
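The proposed factuality evaluation can be illustrated with a minimal sketch. Note the hedge: the paper infers the directionality of reported findings with learned models, whereas here a trivial keyword rule stands in for them purely so the scoring logic is self-contained. The function names and example summaries are hypothetical.

```python
# Hypothetical stand-in for a learned directionality classifier: the
# paper uses trained models to infer the direction of reported
# findings; this keyword rule is for illustration only.
def infer_direction(summary):
    text = summary.lower()
    if any(w in text for w in ("reduced", "decreased", "lower")):
        return "decrease"
    if any(w in text for w in ("increased", "improved", "higher")):
        return "increase"
    return "no_effect"

def factuality_agreement(generated, references):
    """Fraction of generated summaries whose inferred direction matches
    the direction inferred from the corresponding reference summary."""
    matches = sum(
        infer_direction(g) == infer_direction(r)
        for g, r in zip(generated, references)
    )
    return matches / len(references)

gen = ["The drug reduced mortality.", "Treatment improved outcomes."]
ref = ["Mortality was lower with the drug.", "No effect was observed."]
score = factuality_agreement(gen, ref)
# score == 0.5: the first pair agrees (decrease), the second does not.
```

The point of scoring direction rather than surface overlap is that a summary can be fluent and lexically similar to the reference while still reversing the reported effect, which is exactly the factuality failure at issue.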
Forecast-Based Interference: Modelling Multicore Interference from Observable Factors
While there is significant interest in the use of COTS multicore platforms for real-time systems, there have been very few practical methods for calculating the interference multiplier (i.e. the increase in execution time due to interference) between tasks on such systems. COTS multicore platforms present two distinct challenges: firstly, the variable interference between tasks competing for shared resources such as cache; and secondly, the complexity of the hardware mechanisms and policies used, which may result in a system that is very difficult, if not impossible, to analyse, assuming that the exact details of the hardware are even disclosed. This paper proposes a new technique, Forecast-Based Interference analysis, which mitigates both of these issues by combining measurement-based techniques with statistical techniques and forecast modelling to enable the prediction of an interference multiplier for a given set of tasks in an automated and reliable manner. The combination of execution times and interference multipliers can be used both in design, e.g. for specifying timing watchdogs, and in analysis, e.g. for verifying schedulability.
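The interference multiplier itself is a simple ratio, and a minimal sketch makes the measurement-based idea concrete. The measurement data, function name, and quantile choice below are all hypothetical; the paper's actual statistical and forecast models are not reproduced here.

```python
import statistics

def interference_multiplier(isolated_times, contended_times, quantile=0.95):
    """Estimate an interference multiplier for a task from measurements.

    isolated_times: execution times observed with the task running alone.
    contended_times: execution times observed with co-runners competing
    for shared resources (cache, memory bus).
    A high quantile of the contended distribution is used so the
    estimate is a pessimistic forecast rather than a mean-case figure.
    """
    base = statistics.median(isolated_times)
    ranked = sorted(contended_times)
    idx = min(int(quantile * len(ranked)), len(ranked) - 1)
    return ranked[idx] / base

# Hypothetical measurements in milliseconds.
alone = [10.0, 10.2, 9.9, 10.1]
shared = [13.8, 14.5, 15.1, 14.0, 14.9]
m = interference_multiplier(alone, shared)
# m is ~1.5: execution time increases by ~50% under contention.
```

In a design flow, such a multiplier could scale a task's isolated execution-time budget, for example when sizing a timing watchdog, without requiring a detailed model of the platform's hardware arbitration.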
An Ensemble of Bayesian Neural Networks for Exoplanetary Atmospheric Retrieval
Machine learning is now used in many areas of astrophysics, from detecting
exoplanets in Kepler transit signals to removing telescope systematics. Recent
work demonstrated the potential of using machine learning algorithms for
atmospheric retrieval by implementing a random forest to perform retrievals in
seconds that are consistent with the traditional, computationally-expensive
nested-sampling retrieval method. We expand upon their approach by presenting a
new machine learning model, \texttt{plan-net}, based on an ensemble of Bayesian
neural networks that yields more accurate inferences than the random forest for
the same data set of synthetic transmission spectra. We demonstrate that an
ensemble provides greater accuracy and more robust uncertainties than a single
model. In addition to being the first to use Bayesian neural networks for
atmospheric retrieval, we also introduce a new loss function for Bayesian
neural networks that learns correlations between the model outputs.
Importantly, we show that designing machine learning models to explicitly
incorporate domain-specific knowledge both improves performance and provides
additional insight by inferring the covariance of the retrieved atmospheric
parameters. We apply \texttt{plan-net} to the Hubble Space Telescope Wide Field
Camera 3 transmission spectrum for WASP-12b and retrieve an isothermal
temperature and water abundance consistent with the literature. We highlight
that our method is flexible and can be expanded to higher-resolution spectra
and a larger number of atmospheric parameters.
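The claim that an ensemble yields more robust uncertainties than a single model can be sketched in a few lines. This is illustrative only: plan-net's architecture and correlation-aware loss are not reproduced; the assumption here is simply that each ensemble member outputs a Gaussian predictive mean and variance, and the ensemble is treated as a uniform mixture.

```python
import numpy as np

def combine_ensemble(means, variances):
    """Combine per-member predictive means/variances into a mixture.

    means, variances: arrays of shape (n_members, n_outputs), each row
    one member's Gaussian predictive mean and variance per output.
    The mixture variance is the average member variance plus the
    variance of the member means, so disagreement between members
    directly inflates the reported uncertainty.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    mixture_mean = means.mean(axis=0)
    mixture_var = variances.mean(axis=0) + means.var(axis=0)
    return mixture_mean, mixture_var

# Hypothetical predictions from a 3-member ensemble for one retrieved
# parameter (arbitrary units).
mu, var = combine_ensemble([[1.0], [1.2], [0.8]], [[0.04], [0.05], [0.03]])
# mu is ~1.0; var (~0.067) exceeds every single member's variance
# because the members disagree about the mean.
```

This decomposition is why a single model can be overconfident: it reports only its own predictive variance, while the mixture adds a term for model disagreement.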
Accurate Machine Learning Atmospheric Retrieval via a Neural Network Surrogate Model for Radiative Transfer
Atmospheric retrieval determines the properties of an atmosphere based on its
measured spectrum. The low signal-to-noise ratio of exoplanet observations
requires a Bayesian approach to determine posterior probability distributions of
each model parameter, given observed spectra. This inference is computationally
expensive, as it requires many executions of a costly radiative transfer (RT)
simulation for each set of sampled model parameters. Machine learning (ML) has
recently been shown to provide a significant reduction in runtime for
retrievals, mainly by training inverse ML models that predict parameter
distributions, given observed spectra, albeit with reduced posterior accuracy.
Here we present a novel approach to retrieval by training a forward ML
surrogate model that predicts spectra given model parameters, providing a fast
approximate RT simulation that can be used in a conventional Bayesian retrieval
framework without significant loss of accuracy. We demonstrate our method on
the emission spectrum of HD 189733 b and find good agreement with a traditional
retrieval from the Bayesian Atmospheric Radiative Transfer (BART) code
(Bhattacharyya coefficients of 0.9843--0.9972, with a mean of 0.9925, between
1D marginalized posteriors). This accuracy comes while still offering
significant speed enhancements over traditional RT, albeit not as much as ML
methods with lower posterior accuracy. Our method is ~9x faster per parallel
chain than BART when run on an AMD EPYC 7402P central processing unit (CPU).
Neural-network computation using an NVIDIA Titan Xp graphics processing unit is
90--180x faster per chain than BART on that CPU.
Comment: 16 pages, 4 figures, submitted to PSJ 3/4/2020, revised 1/22/2021. Text restructured and updated for clarity, model updated and expanded to work for a range of hot Jupiters, results/plots updated, two new appendices to further justify model selection and methodology.
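The key idea of this last abstract, dropping a fast forward surrogate into an otherwise conventional Bayesian retrieval loop, can be sketched as follows. Everything here is a stand-in: the analytic surrogate_spectrum replaces a trained neural emulator of radiative transfer, and a plain Metropolis-Hastings sampler replaces the paper's retrieval framework; names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained forward surrogate: it maps model
# parameters to a predicted spectrum. In the paper this role is played
# by a neural network emulating radiative transfer; a cheap analytic
# function keeps the sketch self-contained.
def surrogate_spectrum(theta, wavelengths):
    # amplitude * exponential decay, a toy "spectrum"
    return theta[0] * np.exp(-wavelengths / theta[1])

def log_likelihood(theta, wavelengths, observed, noise):
    model = surrogate_spectrum(theta, wavelengths)
    return -0.5 * np.sum(((observed - model) / noise) ** 2)

def metropolis_hastings(theta0, n_steps, step, wavelengths, observed, noise):
    """Conventional random-walk MH loop; the surrogate makes each
    likelihood evaluation cheap compared to full radiative transfer."""
    theta = np.array(theta0, dtype=float)
    logl = log_likelihood(theta, wavelengths, observed, noise)
    chain = []
    for _ in range(n_steps):
        proposal = theta + rng.normal(0.0, step, size=theta.shape)
        logl_new = log_likelihood(proposal, wavelengths, observed, noise)
        if np.log(rng.uniform()) < logl_new - logl:
            theta, logl = proposal, logl_new
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic "observation" generated from known parameters.
wl = np.linspace(1.0, 2.0, 20)
true = np.array([2.0, 1.5])
obs = surrogate_spectrum(true, wl) + rng.normal(0.0, 0.01, wl.size)
chain = metropolis_hastings([1.5, 1.0], 5000, 0.05, wl, obs, 0.01)
posterior_mean = chain[2500:].mean(axis=0)  # discard burn-in
```

The design point of the forward-surrogate approach is visible in the loop: the sampler and posterior machinery stay exactly as in a traditional retrieval, so posterior accuracy is preserved, and only the cost per likelihood evaluation changes.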